The second half of the workshop deck expands the same automation model to end-to-end testing. Playwright is presented as the browser layer in the broader pipeline: AI generates UI tests, GitHub Actions runs them, and HTML reports, screenshots, and traces become validation artifacts instead of just developer debugging tools.
The deck positions Playwright as the browser-driven complement to unit and integration testing, not a replacement for them.
One test suite can drive Chromium, Firefox, and WebKit so teams validate user-visible behavior without multiplying authoring effort.
Clicking, typing, hovering, uploads, dialogs, and navigation all happen in a real browser context rather than in mocked logic alone.
Playwright validates what the user actually sees: text, visibility, field values, URLs, network results, and browser state after the interaction.
The framework captures what rendered at failure time, which makes the artifact useful for both debugging and validation evidence review.
The workshop uses the same prompt model as Jest generation, just with Playwright-specific locators, routes, and output requirements.
The deck’s code walkthrough is teaching habits as much as syntax: use user-facing locators, let Playwright auto-wait, and attach requirement tags at the test level.
- Prefer user-facing locators: getByRole(), getByLabel(), visible text, and explicit test IDs when needed.
- Lean on auto-waiting: expect().toHaveText() and related checks retry until the condition passes or the timeout expires.
- Anchor the journey: the deck says to add the screen route and, where relevant, the Figma reference so the model can anchor the browser journey.
That keeps the generated suite readable and makes requirement-tagged reporting cleaner later.
The prompt reference card explicitly recommends attaching failure screenshots and adding accessibility assertions alongside functional ones.
Before trusting a generated Playwright suite in CI, manually harden the two or three selectors most likely to churn. A quick human pass on locators pays back far more than chasing flaky browser runs later.
This is the most important reframing in the Playwright section of the deck: HTML reports, screenshots, traces, and tagged test names can serve as evidence a QA lead can actually review.
The sample GitHub Actions workflow installs browsers, runs the suite, and points at a staging URL rather than production.
The workflow uploads the Playwright report even on failure, so the run always leaves behind reviewable evidence rather than just a red check.
The deck’s example keeps reports for 30 days, explicitly long enough for a GxP review cycle following a release.
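A workflow with those properties could be sketched roughly as follows; the staging URL and job names are placeholders, not values from the deck.

```yaml
name: e2e
on: [pull_request]
jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps   # install browsers in CI
      - run: npx playwright test
        env:
          BASE_URL: https://staging.example.com   # staging, never production
      - uses: actions/upload-artifact@v4
        if: always()                              # upload the report even on failure
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 30                      # long enough for a GxP review cycle
```

The `if: always()` condition is what turns a red check into reviewable evidence: the report artifact exists whether the run passed or not.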
Each failed assertion can capture the exact UI state the user would have seen at the moment of error.
The trace file records the execution timeline, including network activity, DOM changes, and screenshots, so failures are reproducible after the run.
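The screenshot and trace behavior is driven by the project configuration. A minimal playwright.config.ts sketch, assuming the staging URL arrives via a BASE_URL environment variable:

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    baseURL: process.env.BASE_URL,  // staging URL injected by CI
    screenshot: 'only-on-failure',  // capture the UI state at the moment of error
    trace: 'on-first-retry',        // record timeline, network activity, DOM snapshots
  },
  reporter: [['html', { open: 'never' }]], // HTML report suitable for artifact upload
});
```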
With requirement IDs embedded in describe blocks and test names, the report becomes a partial traceability matrix instead of only a technical log.
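Extracting that partial matrix is mostly string processing over test titles. A sketch, assuming the `[REQ-XX-nnn]` tag convention used above (the shape of `TestResult` is simplified from what Playwright's JSON reporter actually emits):

```typescript
// Derive a requirement -> tests mapping from requirement-tagged test titles.
interface TestResult {
  title: string;
  status: 'passed' | 'failed' | 'skipped';
}

function buildTraceabilityMatrix(results: TestResult[]): Map<string, TestResult[]> {
  const matrix = new Map<string, TestResult[]>();
  const tag = /\[(REQ-[A-Z]+-\d+)\]/g; // one title may cite several requirements
  for (const result of results) {
    for (const match of result.title.matchAll(tag)) {
      const entry = matrix.get(match[1]) ?? [];
      entry.push(result);
      matrix.set(match[1], entry);
    }
  }
  return matrix;
}

// Usage: one row per requirement, with pass counts for the review.
const matrix = buildTraceabilityMatrix([
  { title: '[REQ-HT-042] submits a weekly timesheet', status: 'passed' },
  { title: '[REQ-HT-042] [REQ-HT-043] rejects overlapping entries', status: 'failed' },
]);
for (const [req, tests] of matrix) {
  const passed = tests.filter(t => t.status === 'passed').length;
  console.log(`${req}: ${tests.length} test(s), ${passed} passed`);
}
```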
The implementation plan is intentionally modest so the workflow becomes normal team behavior instead of a one-off workshop exercise.
Run the exercise on one real Hours Tracking function, generate the tests, and open a draft PR to watch the pipeline execute end to end.
Add a coverage threshold gate in GitHub Actions. The deck suggests starting at 80% before tightening toward the 90% standard.
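One way to implement the gate is a small script that reads the Istanbul-style coverage-summary.json and fails the step below the threshold. The file path and summary shape are the common Jest/Istanbul defaults, assumed here rather than taken from the deck:

```typescript
import { existsSync, readFileSync } from 'node:fs';

interface CoverageSummary {
  total: { lines: { pct: number } };
}

// Returns true when line coverage meets the gate.
function meetsThreshold(summary: CoverageSummary, threshold: number): boolean {
  return summary.total.lines.pct >= threshold;
}

// In CI: read the report and exit nonzero so the PR check goes red.
const reportPath = 'coverage/coverage-summary.json';
if (existsSync(reportPath)) {
  const summary: CoverageSummary = JSON.parse(readFileSync(reportPath, 'utf8'));
  if (!meetsThreshold(summary, 80)) {
    console.error(`Line coverage ${summary.total.lines.pct}% is below the 80% gate`);
    process.exit(1);
  }
}
```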
Generate a traceability report from the requirement tags in merged PRs and bring it into Becky’s review process.
Every new feature gets a test prompt and a test file. The deck’s message is that this becomes the default workflow, not an occasional accelerator.
| Standard | Before (Manual) | After (Automated) |
|---|---|---|
| GL-IT-02-S25 §4.3, Test Evidence | Screenshots and manual test logs, assembled by hand | GitHub Actions produces a signed, timestamped test artifact on every run |
| GL-IT-09-S28 §5.1, Req. Traceability | Spreadsheet maps requirements to test cases, updated manually | REQ tags embedded in test names; traceability report generated automatically |
| GAMP 5 Cat 4/5, Validation Coverage | Test scripts written after code; coverage unknown until audit | Coverage threshold enforced as a PR gate; no merge below 90% |
| SOX IT Controls, Change Control | Test results attached to change tickets manually, sometimes post-facto | Test results are a mandatory PR artifact; change cannot close without them |